# Raw Attention Pooling

**Vit So400m Patch14 Siglip 384.webli**
- License: Apache-2.0
- A Vision Transformer based on the SigLIP architecture, containing only the image encoder and using a raw attention pooling mechanism.
- Task: Image Classification (Transformers)
- Publisher: timm
- Downloads: 9,429 · Likes: 0
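
The model above is an image-encoder-only checkpoint published through timm. Below is a minimal usage sketch, assuming the timm identifier `vit_so400m_patch14_siglip_384.webli` (the lowercase form of the card title) and a local image file `example.jpg`; preprocessing is resolved from the model's pretrained configuration.

```python
# Minimal sketch: load the SigLIP image encoder from timm and extract a
# pooled image embedding. Model id and image path are assumptions.
import timm
import torch
from PIL import Image

# num_classes=0 returns the pooled embedding instead of classification logits.
model = timm.create_model(
    "vit_so400m_patch14_siglip_384.webli",
    pretrained=True,
    num_classes=0,
)
model.eval()

# Build the preprocessing pipeline that matches the model's pretraining config.
data_config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_config, is_training=False)

image = Image.open("example.jpg").convert("RGB")
with torch.no_grad():
    embedding = model(transform(image).unsqueeze(0))  # shape: (1, embed_dim)
print(embedding.shape)
```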
**Vit Base Patch16 Siglip 256.webli I18n**
- License: Apache-2.0
- A ViT-B/16 Vision Transformer based on SigLIP, containing only the image encoder and using raw attention pooling.
- Task: Image Classification (Transformers)
- Publisher: timm
- Downloads: 16 · Likes: 0
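
The "raw attention pooling" named in these cards appears to refer to SigLIP-style attention pooling: instead of a class token, a learned probe query attends over all patch tokens to produce the final image embedding. The sketch below illustrates that idea in PyTorch; the class name, token count, and dimensions are illustrative assumptions, and it is not timm's exact implementation.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Pool a sequence of patch tokens into one embedding by letting a
    single learned probe query attend over all tokens."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.probe = nn.Parameter(torch.zeros(1, 1, dim))  # learned query token
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_patches, dim)
        probe = self.probe.expand(tokens.shape[0], -1, -1)
        pooled, _ = self.attn(probe, tokens, tokens)  # probe attends over patches
        return self.norm(pooled).squeeze(1)           # (batch, dim)

# Example with hypothetical sizes: 2 images, 196 patch tokens, width 768.
pool = AttentionPool(dim=768, num_heads=8)
tokens = torch.randn(2, 196, 768)
print(pool(tokens).shape)  # torch.Size([2, 768])
```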